Although weakly-supervised techniques can reduce the labeling effort, it is unclear whether a saliency model trained with weakly-supervised data (e.g., point annotations) can achieve performance equivalent to that of its fully-supervised version. This paper attempts to answer this unexplored question by proving a conjecture: there exists a point-labeled dataset on which saliency models can achieve performance equivalent to training on the densely annotated dataset. To prove this conjecture, we propose a novel yet effective adversarial trajectory-ensemble active learning (ATAL) method. Our contributions are three-fold: 1) Our proposed adversarial attack that triggers uncertainty overcomes the overconfidence of existing active learning methods and accurately locates the uncertain pixels. 2) Our proposed trajectory-ensemble uncertainty estimation method maintains the advantages of ensemble networks while significantly reducing the computational cost. 3) Our proposed relationship-aware diversity sampling algorithm overcomes oversampling while boosting performance. Experimental results show that our ATAL can find such a point-labeled dataset, on which a saliency model achieves $97\%$--$99\%$ of the performance of its fully-supervised version with only ten annotated points per image.
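The abstract does not spell out how the trajectory ensemble is implemented; a minimal sketch follows, assuming it averages saliency predictions from checkpoints saved along a single training run (rather than training separate ensemble members) and scores per-pixel uncertainty from the averaged prediction. The checkpoint handling and entropy score here are illustrative, and the adversarial-attack and diversity-sampling steps are not shown.

```python
import torch

def trajectory_ensemble_uncertainty(model, checkpoints, image):
    """Average saliency maps from checkpoints of one training trajectory and
    derive a per-pixel uncertainty score (hypothetical reading of the
    trajectory-ensemble idea; the paper's exact scheme may differ)."""
    probs = []
    with torch.no_grad():
        for state_dict in checkpoints:                    # snapshots from one run
            model.load_state_dict(state_dict)
            model.eval()
            probs.append(torch.sigmoid(model(image)))     # (1, 1, H, W) saliency map
    mean_prob = torch.stack(probs, dim=0).mean(dim=0)
    # Binary entropy of the ensemble mean as a simple per-pixel uncertainty score.
    eps = 1e-6
    uncertainty = -(mean_prob * (mean_prob + eps).log()
                    + (1 - mean_prob) * (1 - mean_prob + eps).log())
    return mean_prob, uncertainty
```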
Performing 3D dense captioning and visual grounding requires a common and shared understanding of the underlying multimodal relationships. However, despite previous attempts at connecting these two related tasks with highly task-specific neural modules, it remains understudied how to explicitly model their shared nature so as to learn them simultaneously. In this work, we propose UniT3D, a simple yet effective, fully unified transformer-based architecture for jointly solving 3D visual grounding and dense captioning. UniT3D enables learning a strong multimodal representation across the two tasks through a supervised joint pre-training scheme with bidirectional and sequence-to-sequence objectives. With a generic architecture design, UniT3D allows expanding the pre-training scope to a wider variety of training sources, such as data synthesized from 2D prior knowledge, to benefit 3D vision-language tasks. Extensive experiments and analysis demonstrate that UniT3D obtains significant gains for 3D dense captioning and visual grounding.
Various depth estimation models are now widely used on many mobile and IoT devices for image segmentation, bokeh effect rendering, object tracking and many other mobile tasks. Thus, it is crucial to have efficient and accurate depth estimation models that can run fast on low-power mobile chipsets. In this Mobile AI challenge, the target was to develop deep-learning-based single-image depth estimation solutions that can show real-time performance on IoT platforms and smartphones. For this, the participants used a large-scale RGB-to-depth dataset that was collected with the ZED stereo camera, which is capable of generating depth maps for objects located up to 50 meters away. The runtime of all models was evaluated on the Raspberry Pi 4 platform, where the developed solutions were able to generate VGA-resolution depth maps at up to 27 FPS while achieving high-fidelity results. All models developed in the challenge are also compatible with any Android or Linux-based mobile device; a detailed description of the models is provided in this paper.
In RGB-D based 6D pose estimation, direct regression approaches can directly predict the 3D rotation and translation from RGB-D data, allowing for quick deployment and efficient inference. However, directly regressing the absolute translation of the pose suffers from the mismatch between the object translation distributions of the training and testing datasets, which is usually caused by the diversity of object pose distributions in 3D physical space. To this end, we generalize the pin-hole camera projection model to a residual-based projection model and propose the projective residual regression (Res6D) mechanism. Given a reference point for each object in an RGB-D image, Res6D not only reduces the distribution gap and shrinks the regression target to a small range by regressing the residual between the target and the reference point, but also aligns its output residual and its input to follow the projection equation between the 2D plane and 3D space. By plugging Res6D into the latest direct regression methods, we achieve state-of-the-art overall results on datasets including Occlusion LineMOD (ADD(S): 79.7%), LineMOD (ADD(S): 99.5%), and YCB-Video (AUC of ADD(S): 95.4%).
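To make the residual idea concrete, here is a minimal sketch of how such a regression target could be formed and inverted, assuming the residual is simply the offset between the ground-truth translation and a per-object reference point, and that consistency with the pin-hole projection is checked via the standard equation $[u, v, 1]^T \sim K [X, Y, Z]^T$. All function names are illustrative, not the paper's API.

```python
import numpy as np

def residual_target(t_gt, p_ref_3d):
    """Res6D-style target (sketch): regress the residual w.r.t. a per-object
    reference point instead of the absolute translation, shrinking the range."""
    return t_gt - p_ref_3d

def recover_translation(residual_pred, p_ref_3d):
    """Recover the absolute translation from the predicted residual."""
    return p_ref_3d + residual_pred

def project(K, point_3d):
    """Pin-hole projection used to keep 2D/3D predictions consistent."""
    uvw = K @ point_3d            # K: 3x3 intrinsics, point_3d: (X, Y, Z)
    return uvw[:2] / uvw[2]       # (u, v) on the image plane
```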
Video-based person re-identification (ReID) aims to identify a given pedestrian from video sequences captured by multiple non-overlapping cameras. To aggregate the temporal and spatial features of video samples, graph neural networks (GNNs) have been introduced. However, existing graph-based models, such as STGCN, perform mean/max pooling on node features to obtain the graph representation, which neglects the graph topology and node importance. In this paper, we propose a graph pooling network (GPNet) to learn a multi-granularity graph representation for video retrieval, in which a graph pooling layer is implemented to downsample the graph. We first construct a multi-granularity graph whose node features are image embeddings learned by a backbone, and whose edges are established between temporal and Euclidean neighborhood nodes. We then apply multiple graph convolutional layers to perform neighborhood aggregation on the graph. To downsample the graph, we propose a multi-head full-attention graph pooling (MHFAPool) layer, which integrates the advantages of existing node-clustering and node-selection pooling methods. Specifically, MHFAPool takes the principal eigenvector of the full attention matrix as the aggregation coefficients, so that each pooled node involves global graph information. Extensive experiments show that our GPNet achieves competitive results on four widely used datasets, i.e., MARS, DukeMTMC-VideoReID, iLIDS-VID and PRID-2011.
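A minimal sketch of the eigenvector-based pooling step, under the assumption that each head forms a full (dense) attention matrix over all nodes and uses its principal eigenvector as global aggregation weights; the head partitioning and projections in the actual GPNet layer may differ.

```python
import torch

def mhfa_pool(node_feats, num_heads=4):
    """Sketch of multi-head full-attention pooling: per head, the principal
    eigenvector of the n x n attention matrix weights all nodes (illustrative,
    assumes feature dim is divisible by num_heads)."""
    n, d = node_feats.shape
    head_dim = d // num_heads
    pooled = []
    for h in range(num_heads):
        x = node_feats[:, h * head_dim:(h + 1) * head_dim]          # (n, head_dim)
        attn = torch.softmax(x @ x.t() / head_dim ** 0.5, dim=-1)   # full n x n attention
        eigvals, eigvecs = torch.linalg.eig(attn)                   # dense eigendecomposition
        principal = eigvecs[:, eigvals.real.argmax()].real.abs()    # principal eigenvector
        weights = principal / principal.sum()                       # global node weights
        pooled.append(weights @ x)                                  # weighted sum over nodes
    return torch.cat(pooled, dim=-1)                                # (d,) pooled representation
```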
Existing methods for video-based person re-identification (ReID) mainly learn the appearance features of a given pedestrian via a feature extractor and a feature aggregator. However, appearance models fail when different pedestrians look similar. Considering that different pedestrians have distinct walking postures and body proportions, we propose to learn discriminative pose features in addition to appearance features for video retrieval. Specifically, we implement a two-branch architecture that learns appearance features and pose features separately and then concatenates them for inference. To learn pose features, we first detect the pedestrian pose in each frame with an off-the-shelf pose detector and construct a temporal graph from the pose sequence. We then exploit a recurrent graph convolutional network (RGCN) to learn node embeddings of the temporal pose graph, which designs a global information propagation mechanism to simultaneously achieve neighborhood aggregation of intra-frame nodes and message passing between inter-frame graphs. Finally, we propose a dual-attention method consisting of node attention and temporal attention to obtain the temporal graph representation from the node embeddings, where a self-attention mechanism is employed to learn the importance of each node and each frame. We validate the proposed method on three video-based ReID datasets, i.e., MARS, DukeMTMC and iLIDS-VID, and the experimental results demonstrate that the learned pose features effectively improve the performance of existing appearance models.
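A minimal sketch of the dual-attention readout described above, assuming node attention weights joints within each frame and temporal attention weights frames; the layer shapes and the single-linear scoring are illustrative choices, not the paper's exact design.

```python
import torch
import torch.nn as nn

class DualAttentionPool(nn.Module):
    """Sketch of node + temporal attention over RGCN node embeddings."""
    def __init__(self, dim):
        super().__init__()
        self.node_score = nn.Linear(dim, 1)   # scores joints within a frame
        self.frame_score = nn.Linear(dim, 1)  # scores frames within the sequence

    def forward(self, node_embed):            # (T, N, dim): frames x joints x features
        node_w = torch.softmax(self.node_score(node_embed), dim=1)    # joint importance
        frame_feat = (node_w * node_embed).sum(dim=1)                 # (T, dim)
        frame_w = torch.softmax(self.frame_score(frame_feat), dim=0)  # frame importance
        return (frame_w * frame_feat).sum(dim=0)                      # (dim,) pose feature
```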
Model-based gait recognition methods typically adopt pedestrian walking postures to identify humans. However, existing methods do not explicitly resolve the large intra-class variance of human poses caused by changes in camera view. In this paper, we propose to generate multi-view pose sequences for each single-view pose sample by learning full-rank transformation matrices via a lower-upper generative adversarial network (LUGAN). Based on the prior of camera imaging, we derive that the spatial coordinates of cross-view poses satisfy a linear transformation by a full-rank matrix; accordingly, this paper employs adversarial training to learn the transformation matrix from the source pose and the target view, so as to obtain the target-view pose sequence. To this end, we implement a generator consisting of graph convolutional (GCN) layers, fully connected (FC) layers and two-branch convolutional (CNN) layers: the GCN and FC layers encode the source pose sequence and the target view, then the two CNN branches learn a lower-triangular matrix and an upper-triangular matrix, respectively, which are finally multiplied to formulate the full-rank transformation matrix. For the purpose of adversarial training, we further design a condition discriminator that distinguishes whether a pose sequence is real or generated. To enable higher-order correlation learning, we propose a plug-and-play module named multi-scale hypergraph convolution (HGC) to replace the spatial graph convolutional layer in the baseline, which can simultaneously model joint-level, part-level and body-level correlations. Extensive experiments on two large gait recognition datasets, i.e., CASIA-B and OUMVLP-Pose, demonstrate that our method outperforms the baseline model and pose-based methods by a large margin.
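The LU construction itself is simple to illustrate: the product of a lower-triangular and an upper-triangular matrix with non-zero diagonals is full rank. The sketch below assumes a unit diagonal on the lower factor and a positive diagonal on the upper factor to guarantee invertibility; how the paper constrains the diagonals is not stated in the abstract.

```python
import torch

def full_rank_transform(lower_raw, upper_raw):
    """Sketch of LUGAN's LU construction: one branch output becomes a
    lower-triangular factor, the other an upper-triangular factor, and their
    product is the full-rank view transformation (diagonal handling assumed)."""
    k = lower_raw.shape[-1]
    L = torch.tril(lower_raw, diagonal=-1) + torch.eye(k)           # unit lower-triangular
    diag = torch.diagonal(upper_raw).abs() + 1e-3                   # keep U invertible
    U = torch.triu(upper_raw, diagonal=1) + torch.diag(diag)
    return L @ U                                                    # full rank by construction

# coords: (num_joints, k) source-view pose coordinates, mapped to the target view:
# coords_target = coords @ full_rank_transform(lower_raw, upper_raw).t()
```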
Monocular depth estimation is an important task in the computer vision community. Although tremendously successful methods have achieved excellent results, most of them are computationally expensive and not applicable to real-time inference. In this paper, we aim at a more practical application of monocular depth estimation, where the solution should consider not only accuracy but also the inference time on mobile devices. To this end, we first develop an end-to-end learning-based model with a small weight size (1.4MB) and a short inference time (27 FPS on Raspberry Pi 4). We then propose a simple yet effective data augmentation strategy, called R2 crop, to boost model performance. Furthermore, we observe that a simple lightweight model trained with only a single loss term suffers from a performance bottleneck. To alleviate this problem, we adopt multiple loss terms that provide sufficient constraints during the training stage. Moreover, with a simple dynamic re-weighting strategy, we can avoid time-consuming hyper-parameter selection for the loss terms. Finally, we adopt structure-aware distillation to further improve model performance. Notably, our solution ranked 2nd in the MAI&AIM 2022 Monocular Depth Estimation Challenge, with an SI-RMSE of 0.311, an RMSE of 3.79, and an inference time of 37 ms tested on Raspberry Pi 4, and we provide the fastest solution in the challenge. Code and models will be released at \url{https://github.com/zhyever/litedepth}.
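The abstract does not specify the dynamic re-weighting scheme; one plausible reading is sketched below, where each loss term is normalized by its own detached magnitude so that no per-term weight has to be tuned by hand. Both the weighting rule and the example loss terms (an L1 term and a scale-invariant log term) are assumptions for illustration, not the paper's exact formulation.

```python
import torch

def combined_depth_loss(pred, target, loss_fns):
    """Combine several depth loss terms with a simple dynamic re-weighting:
    dividing each term by its detached value keeps every term at roughly unit
    scale without per-term hyper-parameters (illustrative scheme)."""
    total = 0.0
    for fn in loss_fns:
        loss = fn(pred, target)
        total = total + loss / (loss.detach() + 1e-8)
    return total

def l1_loss(pred, target):
    return (pred - target).abs().mean()

def silog_loss(pred, target, eps=1e-6):
    """Scale-invariant log loss, a common choice for monocular depth."""
    d = torch.log(pred + eps) - torch.log(target + eps)
    return torch.sqrt((d ** 2).mean() - 0.85 * d.mean() ** 2)
```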
We introduce FedD3, a novel federated learning framework that reduces the overall communication volume and thereby opens up the concept of federated learning to more application scenarios in network-constrained environments. It achieves this by leveraging local dataset distillation instead of traditional learning approaches, which (i) significantly reduces the communication volume and (ii) limits transfers to one-shot communication rather than iterative multi-round exchanges. Instead of sharing model updates as in other federated learning approaches, FedD3 lets the connected clients distill their local datasets independently and then aggregates those decentralized distilled datasets (typically a few unrecognizable images, usually smaller than a model) across the network to form the final model in a single shot. Our experimental results show that FedD3 significantly outperforms other federated learning frameworks in terms of the required communication volume, while also offering the possibility of balancing the trade-off between accuracy and communication cost depending on the use case or target dataset. For instance, to train an AlexNet model on a non-IID CIFAR-10 dataset with 10 clients, FedD3 can increase accuracy by over 71% with a similar communication volume, or save 98% of the communication volume while achieving the same accuracy, compared with other federated learning approaches.
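The one-shot workflow described above is easy to outline; the sketch below assumes a hypothetical client-side `distill_local_dataset()` routine (the distillation algorithm itself is not part of the abstract) and shows only the single aggregation round followed by centralized training on the union of the distilled sets.

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

def fedd3_one_shot(clients, build_model, epochs=10):
    """Sketch of the FedD3 one-shot workflow: each client sends a tiny distilled
    dataset once, and the server trains the final model on their union."""
    distilled_sets = []
    for client in clients:
        images, labels = client.distill_local_dataset()       # hypothetical client API
        distilled_sets.append(TensorDataset(images, labels))  # a few synthetic samples

    server_data = ConcatDataset(distilled_sets)               # one-shot aggregation
    loader = DataLoader(server_data, batch_size=64, shuffle=True)

    model = build_model()
    opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):                                   # trained once, centrally
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model
```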
Twitter bot detection is an important and meaningful task. Existing text-based methods can deeply analyze the content of user tweets and achieve high performance. However, novel Twitter bots evade these detectors by stealing tweets from genuine users and diluting malicious content with benign tweets. Such novel bots are characterized by semantic inconsistency. In addition, methods that leverage the Twitter graph structure have recently emerged and shown great competitiveness. However, hardly any method deeply fuses the text and graph modalities and lets them interact, so as to leverage their respective advantages and learn the relative importance of the two modalities. In this paper, we propose a novel model named BIC that makes the text and graph modalities deeply interactive and detects tweet semantic inconsistency. Specifically, BIC contains a text propagation module and a graph propagation module that conduct bot detection on the text and the graph structure, respectively, together with a provably effective text-graph interaction module that makes the two interact. Moreover, BIC contains a semantic consistency detection module to learn semantic consistency information from tweets. Extensive experiments demonstrate that our framework outperforms competitive baselines on a comprehensive Twitter bot detection benchmark. We also demonstrate the effectiveness of the proposed interaction and semantic consistency detection modules.